
feat(skills): add 7 CS academic writing and resume building skills#1488

Open
f3r21 wants to merge 4 commits into affaan-m:main from f3r21:claude/improve-seven-skills-sdmBs

Conversation

@f3r21 f3r21 commented Apr 18, 2026

What Changed

Adds 7 new skills in two themed clusters, plus the manifest + CI wiring to make them installable via `./install.sh --profile full`.

CS academic writing (4 skills — JSONL output, ajv-validated)

  • `sentence-clarity-cs` — sentence-level clarity editor (length, passive voice, ambiguous pronouns, nominalization, hedge language)
  • `abstract-methods-results-cs` — reviewer for Abstract / Methods / Results sections
  • `paper-structure-cs` — validates section presence, order, heading hierarchy, bibliography completeness
  • `academic-final-review-cs` — pre-submission checklist (now JSONL, was pipe-delimited pre-refactor)

Resume building (3 skills — single JSON object output)

  • `harvard-resume-validator` — Harvard College guidelines (GPA ≥ 3.7, action verbs, formatting)
  • `kickass-resume-validator` — industry "Kickass" guidelines (one-page, ATS-compliant, situation-aware)
  • `resume-job-alignment` — scores resume-to-JD fit and proposes targeted bullet rewrites

Shared helper

  • `skills/_shared/resume-common.md` — extracted ~60% overlap between the three resume skills (action verbs, ATS pitfalls, one-page rules). Underscore-prefixed by convention (not itself a skill).

Per-skill infrastructure (every skill ships)

  • `schema/output.schema.json` — JSON Schema draft-07, strict (`additionalProperties: false`, closed enums)
  • `evals/evals.json` — 3 evals per skill (21 total), each with `prompt` + `expected_output` rubric
  • All example output objects inside SKILL.md validate against their schema under ajv
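As a rough sketch of what `additionalProperties: false` buys here, the following implements a hand-rolled subset of the checks. The repo itself compiles the real schemas with ajv; the two-field schema below is a hypothetical stand-in, not copied from any skill.

```javascript
// Hypothetical minimal schema; real skill schemas live in
// skills/<name>/schema/output.schema.json and are compiled by ajv.
const schema = {
  required: ["file_analyzed", "summary"],
  properties: { file_analyzed: { type: "string" }, summary: { type: "string" } },
  additionalProperties: false,
};

// Checks only the required keys and the additionalProperties: false rule.
function strictCheck(obj, s) {
  const missing = s.required.filter((k) => !(k in obj));
  const extra = Object.keys(obj).filter((k) => !(k in s.properties));
  return { ok: missing.length === 0 && extra.length === 0, missing, extra };
}

console.log(strictCheck({ file_analyzed: "resume.tex", summary: "ok" }, schema).ok);  // true
console.log(strictCheck({ file_analyzed: "resume.tex", note: "oops" }, schema).ok);   // false
```

The point of the strict mode: any stray key a model invents fails validation instead of silently passing through.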

Install manifest + CI (this commit only)

  • Two new install modules: `academic-writing`, `resume-toolkit`
  • Two new components: `capability:academic-writing`, `capability:resume-toolkit`
  • Wired into `full` profile; `academic-writing` also in `research`
  • `package.json` `files` allowlist extended to cover the 7 skills + `_shared/`
  • `validate-skills.js` now skips underscore-prefixed dirs so `_shared/` isn't required to ship a SKILL.md
  • README + AGENTS skill counts synced 183 → 190 (English + zh-CN)

Why This Change

Two gaps in the current ECC surface:

  1. No CS research paper authoring skills. Academic CS writing has very specific failure modes (passive voice, nominalization chains, missing section ordering, vague abstracts without metrics) that general writing skills don't catch. These four form a pipeline: sentence-clarity → section review → structural validation → final checklist.
  2. No resume/career skills. Two different resume-review traditions (Harvard academic, industry "kickass") plus a job-alignment scorer cover the main resume-help requests without pushing users toward fabrication on gaps.

Every skill ships with a strict JSON schema and 3 evals so outputs are machine-checkable and regression-tested.
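Based on the `prompt` + `expected_output` shape described above, a skill's `evals/evals.json` presumably looks something like this (the entries below are invented for illustration, not copied from the repo):

```json
[
  { "prompt": "Review this abstract: We propose a system...", "expected_output": "Flags vague objective and missing quantitative results" },
  { "prompt": "Review this abstract: Our method improves F1 by 4.2 points...", "expected_output": "Passes; abstract states metric and baseline" },
  { "prompt": "Review this methods section with no dataset details", "expected_output": "Flags missing dataset and reproducibility information" }
]
```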

Testing Done

  • Automated tests pass locally (`node tests/run-all.js` → 1867 / 1867 ✅)
  • Edge cases considered and tested
  • Manual testing completed

Eval results (pass@3, live skill invocations)

Schema validation (ajv strict):

| Skill | Valid / Total |
|-------|---------------|
| sentence-clarity-cs | 9/9 |
| abstract-methods-results-cs | 33/33 |
| paper-structure-cs | 9/9 |
| academic-final-review-cs | 39/39 |
| harvard-resume-validator | 3/3 |
| kickass-resume-validator | 3/3 |
| resume-job-alignment | 3/3 |

21 trials · 99 objects · 100% schema-valid · pass@3 = 7/7 skills · rubric spot-check 7/7
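The "99 objects" count is cheap to verify because JSONL is trivially machine-checkable: each line parses independently and is then validated. A minimal sketch in plain JavaScript follows; the sample lines and field values are illustrative, and the real harness validates each object with ajv rather than just parsing it.

```javascript
// One JSON object per line; a trial fails if any non-empty line
// is malformed or schema-invalid.
const trialOutput = [
  '{"line": 3, "problem_type": "passive_voice"}',
  '{"line": 7, "problem_type": "ambiguous_pronoun"}',
].join("\n");

function parseJsonl(text) {
  return text
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line)); // throws on the first malformed line
}

const objects = parseJsonl(trialOutput);
console.log(objects.length); // 2 objects from this trial
```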

Install verification

`./install.sh --profile full` from this branch now copies all 7 skills + `_shared/` to `~/.claude/skills/`. Prior to this PR, the manifest-driven installer silently skipped them.

Type of Change

  • `feat:` New feature (7 skills + install wiring)
  • `refactor:` The 48338e5 commit on this branch hardened formats (JSONL unification, schema addition, `_shared/` extraction)

Security & Quality Checklist

  • No secrets or API keys committed
  • JSON files validate cleanly (all 14 skill JSON files + 3 manifest JSON files parse; 7 schemas compile under ajv)
  • Pre-commit hooks pass locally
  • No sensitive data exposed in logs or output
  • Follows conventional commits format

Documentation

  • Updated relevant documentation (README counts 183 → 190, AGENTS counts)
  • Added inline examples inside every SKILL.md; all validate against their schema
  • README updated

Commits on this branch:

  • `6061c76` feat: add 7 skills for CS academic writing and resume building
  • `48338e5` refactor(skills): align and harden 7 CS paper and resume skills
  • `633edc0` feat(install): register CS academic and resume skills for installer and CI

Summary by cubic

Adds 7 new skills for CS academic writing and resume building, each with strict schemas and evals, and wires them into the installer so `./install.sh --profile full` includes them (`academic-writing` also in `research`). Increases the skill count from 183 to 190.

  • New Features

    • CS academic writing (JSONL): sentence-clarity-cs, abstract-methods-results-cs, paper-structure-cs, academic-final-review-cs.
    • Resume toolkit (single JSON): harvard-resume-validator, kickass-resume-validator, resume-job-alignment.
    • Shared resume guidance extracted to skills/_shared/resume-common.md.
  • Refactors

    • Unified outputs and quality gates: JSONL for CS skills, strict JSON Schema (additionalProperties: false, closed enums) for all skills; examples and 21 evals validate under ajv.
    • Installer and manifests: new modules academic-writing, resume-toolkit and components capability:academic-writing, capability:resume-toolkit; enabled in full (both) and research (academic-writing) profiles.
    • Tooling and docs: extended package.json files allowlist; scripts/ci/validate-skills.js now skips underscore-prefixed dirs; README/AGENTS counts updated to 190 skills (EN + zh-CN).

Written for commit 633edc0. Summary will update on new commits.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added 7 new skills expanding total capacity from 183 to 190 skills
    • Introduced Academic Writing module with tools for CS paper sections (abstract, methods, results, clarity, and final review)
    • Introduced Resume Toolkit with validators aligned to Harvard and Kickass standards, plus job description alignment analysis
    • Added new "research" and "full" installation profiles bundling academic and resume tools
  • Documentation

    • Updated all documentation to reflect expanded skill catalog

f3r21 and others added 4 commits April 16, 2026 08:06
Seven skills from the "CS academic writing and resume building" batch
drifted from each other in output format, had large amounts of
duplicated content between sibling validators, leaked third-party
platform branding into descriptions, and were missing evals/schemas
needed to regression-test them. This pass aligns them to the repo
pattern documented in docs/SKILL-DEVELOPMENT-GUIDE.md.

CS paper pipeline (paper-structure-cs, abstract-methods-results-cs,
sentence-clarity-cs, academic-final-review-cs):
- academic-final-review-cs converted from pipe-delimited to JSONL so
  all four CS skills share a parseable output format.
- Each SKILL.md now shows a concrete example object with every
  required field; duplicated sections trimmed (notably sentence
  clarity's repeated output block and paper-structure's heading-
  convention prose that duplicated the JSONL categories).
- Removed the underspecified `depends_on` field from paper-structure
  output; issues are ordered instead.
- Added cross-links so each skill points to its place in the
  structure -> sections -> sentences -> final-review pipeline.

Resume pipeline (harvard-resume-validator, kickass-resume-validator,
resume-job-alignment):
- Removed all "Cowork" platform references from descriptions and
  bodies; generic resume skills no longer name a specific third-party
  tool.
- Dropped non-standard `compatibility:` frontmatter field; added
  `tags:` per the documented schema.
- Extracted shared content (action-verb bank, LaTeX ATS pitfalls,
  one-page formatting rules, weak-to-strong phrase map) into
  skills/_shared/resume-common.md; both validators now link there
  instead of inlining ~60% duplicated prose.
- Each validator keeps only what is specific to it and surfaces a
  "Differs from <sibling>" callout so the GPA-threshold and section-
  ordering disagreements are visible rather than buried.
- resume-job-alignment trimmed from 618 lines by linking validators
  rather than restating their rules; alignment-specific content
  (keyword extraction, scenarios, before/after) preserved.

Schemas and evals added to every skill:
- schema/output.schema.json for all 7 skills (JSON Schema draft-07,
  ajv-validated; doc examples validated against their schemas).
- evals/evals.json added for academic-final-review-cs (rewritten for
  JSONL), harvard-resume-validator, kickass-resume-validator, and
  resume-job-alignment; the three skills that already had evals are
  retained unchanged where possible.

Verification:
- All 14 JSON files parse; all 7 schemas compile under ajv.
- Every SKILL.md example object validates against its schema.
- `npx markdownlint-cli` passes on all 7 skills plus _shared under
  the repo's .markdownlint.json (fixed a pre-existing MD025 in
  abstract-methods-results-cs).
- Frontmatter on every skill contains only documented keys (name,
  description, origin, tags).

https://claude.ai/code/session_0189fb3hKKS8jmCwCMp4EFsX
…nd CI

Adds two install modules (academic-writing, resume-toolkit) to the
manifest so ./install.sh --profile full picks up the 7 CS academic and
resume skills added in 48338e5. Also wires academic-writing into the
research profile and adds matching components.

- Extend npm publish files allowlist with the 7 skill paths and
  skills/_shared/
- Teach validate-skills.js to skip underscore-prefixed dirs so
  skills/_shared/ is not required to ship a SKILL.md
- Sync README/AGENTS skill counts (183 to 190) in English and zh-CN

ecc-tools bot commented Apr 18, 2026

ECC bundle files are already tracked in this repository. Skipping generation of another bundle PR.

coderabbitai bot commented Apr 18, 2026

📝 Walkthrough

This PR expands the plugin's skill collection from 183 to 190 by introducing two new skill modules: academic-writing (4 skills for CS paper review) and resume-toolkit (3 skills for resume validation). Documentation, manifests, and package configuration are updated to reflect this expansion. The validation script is modified to skip underscore-prefixed skill directories.

Changes

| Cohort / File(s) | Summary |
|------|------|
| **Documentation Count Updates**<br>`AGENTS.md`, `README.md`, `README.zh-CN.md`, `docs/zh-CN/AGENTS.md`, `docs/zh-CN/README.md` | Updated skill inventory count from 183 to 190 across English and Chinese documentation. |
| **Manifest & Profile Definitions**<br>`manifests/install-components.json`, `manifests/install-modules.json`, `manifests/install-profiles.json` | Added two new capability components (academic-writing, resume-toolkit) with module definitions and integrated them into research and full install profiles. |
| **Package Publishing Configuration**<br>`package.json` | Updated files whitelist with new skill directories for academic-writing and resume-toolkit modules, reordered existing entries. |
| **Validation Script**<br>`scripts/ci/validate-skills.js` | Added early continue to skip directories with a leading underscore prefix during skill validation. |
| **Shared Resume Resources**<br>`skills/_shared/resume-common.md` | Added deduplicated guidance document for resume writing, including action-verb banks, ATS pitfalls, LaTeX preambles, and standard resume sections. |
| **Academic Writing Skills**<br>`skills/abstract-methods-results-cs/*`, `skills/academic-final-review-cs/*`, `skills/paper-structure-cs/*`, `skills/sentence-clarity-cs/*` | Added four new CS paper-review skills with SKILL.md specifications, JSON evaluation datasets, and output schema definitions for validating abstract/methods/results sections, final pre-submission checklists, paper structure/organization, and sentence-level clarity. |
| **Resume Validation Skills**<br>`skills/harvard-resume-validator/*`, `skills/kickass-resume-validator/*`, `skills/resume-job-alignment/*` | Added three new resume validation skills with SKILL.md specifications, evaluation datasets, and output schemas for Harvard College standards, Kickass Resume guidelines, and resume-to-job-posting alignment analysis. |

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

  • PR #748 — Modifies scripts/ci/validate-skills.js to filter validated skill directories (both PRs change directory-iteration logic)
  • PR #537 — Updates install manifests (install-components.json, install-modules.json, install-profiles.json) to register new modules
  • PR #960 — Modifies package.json files whitelist for skill directory publishing

Suggested reviewers

  • affaan-m

Poem

🐰 Seven skills hop in, rabbits celebrate with glee,
From Harvard resumes to papers in clarity,
Academic and aligned, a bundled delight,
The skill garden grows—190 shines bright! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|------|------|------|------|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
|------|------|------|
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title clearly summarizes the main change: addition of 7 new CS academic writing and resume building skills, which aligns with the substantive additions of academic-writing and resume-toolkit modules throughout the changeset. |




greptile-apps bot commented Apr 18, 2026

Greptile Summary

Adds 7 new skills in two clusters — 4 CS academic writing helpers (sentence-clarity-cs, abstract-methods-results-cs, paper-structure-cs, academic-final-review-cs) and 3 resume skills (harvard-resume-validator, kickass-resume-validator, resume-job-alignment) — along with a shared _shared/resume-common.md helper, strict draft-07 JSON schemas, 3 evals per skill, and installer/CI wiring. All skills ship with additionalProperties: false schemas and the CI validator is updated to skip underscore-prefixed directories.

Confidence Score: 5/5

Safe to merge; all findings are P2 style/naming suggestions with no correctness or schema-validity impact.

No P0 or P1 findings. The three P2 comments are: (1) a naming inconsistency in top_3_fixes schema constraints, (2) an ambiguous LaTeX-check table in academic-final-review-cs SKILL.md, and (3) a missing inline comment in validate-skills.js. None affect runtime behavior, schema validation, or installation correctness.

skills/academic-final-review-cs/SKILL.md (LaTeX table ambiguity) and skills/harvard-resume-validator/schema/output.schema.json + skills/kickass-resume-validator/schema/output.schema.json (top_3_fixes naming vs. minItems constraint)

Important Files Changed

| Filename | Overview |
|------|------|
| `skills/sentence-clarity-cs/SKILL.md` | New skill: sentence-level clarity editor for CS papers. Well-structured JSONL output; eight problem_type categories align exactly with the schema enum; clear examples and anti-pattern tables. |
| `skills/academic-final-review-cs/SKILL.md` | New skill: pre-submission checklist. The LaTeX-specific check table uses identical formatting to the canonical 25-item list, but none of those 6 items exist in the schema enum, which could mislead future maintainers or models. |
| `skills/harvard-resume-validator/schema/output.schema.json` | Strict schema (additionalProperties: false, closed enums). top_3_fixes allows minItems: 1, maxItems: 3 while the field name and SKILL prose imply exactly 3; the same issue exists in the kickass-resume-validator schema. |
| `scripts/ci/validate-skills.js` | One-line addition to skip underscore-prefixed directories. Works correctly for _shared/, but the convention is implicit; no comment documents why _ prefixes are excluded. |
| `manifests/install-modules.json` | Two new modules (academic-writing, resume-toolkit) wired correctly. resume-toolkit includes skills/_shared so relative ../_shared/ links in resume skill SKILL.md files resolve after install. |
| `manifests/install-profiles.json` | academic-writing added to research and full profiles; resume-toolkit added only to full. Clean and consistent with the PR description. |
| `skills/resume-job-alignment/SKILL.md` | New skill: resume-to-job-description alignment scorer with before/after bullet rewrites. Well-scoped, references _shared, recommends running the validators first. |
| `skills/kickass-resume-validator/schema/output.schema.json` | Same top_3_fixes minItems/maxItems mismatch as the harvard schema. Otherwise well-structured, with situation enum and ats_compliance fields. |
| `skills/_shared/resume-common.md` | Shared reference material (action verbs, ATS pitfalls, LaTeX preamble). Correctly underscore-prefixed so the validator skips it. |
| `manifests/install-components.json` | Two new capability components registered. Each maps to exactly one module. Clean addition. |

Reviews (1): Last reviewed commit: "feat(install): register CS academic and ..."

Comment on lines +134 to +145

If the paper is written in LaTeX, additionally check:

| Item | What to verify |
|------|-----------------|
| No overfull/underfull boxes | Compile with `pdflatex` and check `.log` for warnings; fix line breaks |
| No widow/orphan lines | Last line of paragraph appears alone; first line appears alone at page break |
| Float placement | Figures/tables appear reasonably close to first citation (same page or next) |
| Bibliography style matches | `.bst` file (plain, acm, ieeetr, etc.) matches venue requirements |
| Package conflicts | No conflicting packages (e.g., both `geometry` and `fullpage`); check console |
| PDF metadata | PDF title, author, and keywords match document content |


P2 LaTeX-specific items not representable in schema output

The LaTeX-specific checks table (lines 134–145) lists 6 items (No overfull/underfull boxes, No widow/orphan lines, Float placement, Bibliography style matches, Package conflicts, PDF metadata) in the same table format as the canonical checklist, but none of them appear in the item enum of schema/output.schema.json. The "Output Requirements" section further says "Do NOT add custom items; stick to the canonical list."

A future maintainer or a model reading this section in isolation could reasonably conclude these items should appear as output objects — but any attempt to emit them would produce schema-invalid JSONL. A comment or explicit note (e.g., "Use these to inform guidance on the Font and margins and Headings consistent canonical items — do not emit them as separate JSONL lines") would prevent the ambiguity.

Comment on lines +28 to +32
"type": "array",
"items": { "type": "string" },
"minItems": 1,
"maxItems": 3
}

P2 top_3_fixes name implies three items but schema allows 1–3

The field is named top_3_fixes, but "minItems": 1, "maxItems": 3 means it can legally contain just one or two items. For a resume with very few issues, an LLM might return a single-item array and be fully schema-valid, which clashes with the name and the SKILL.md narrative ("Three top fixes"). The same constraint exists in kickass-resume-validator/schema/output.schema.json. Consider renaming to top_fixes (and updating the SKILL.md prose) or locking it to "minItems": 3, "maxItems": 3.
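To make the mismatch concrete, here is a plain-JavaScript illustration of the two candidate bounds (no ajv dependency; `fitsBounds` is a simplified stand-in for the real minItems/maxItems keyword semantics):

```javascript
// Current schema bounds vs. the suggested locked bounds for top_3_fixes.
const current = { minItems: 1, maxItems: 3 }; // as shipped
const locked = { minItems: 3, maxItems: 3 };  // the "exactly three" reading

function fitsBounds(arr, { minItems, maxItems }) {
  return arr.length >= minItems && arr.length <= maxItems;
}

const singleFix = ["Quantify the impact in bullet 2"];
console.log(fitsBounds(singleFix, current)); // true: schema-valid despite the name
console.log(fitsBounds(singleFix, locked));  // false: locked bounds reject it
```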

let validCount = 0;

for (const dir of dirs) {
if (dir.startsWith('_')) continue;

P2 Underscore-skip is a global convention with no documentation

Suggested change
if (dir.startsWith('_')) continue;
if (dir.startsWith('_')) continue; // skip shared/helper dirs (not skills themselves)

The single-line addition silently extends the _ prefix to mean "not a skill, skip validation" for any future directory. Adding an inline comment makes the convention explicit so future contributors don't accidentally name a real skill _my-skill/ and wonder why CI never validates it.

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (3)
scripts/ci/validate-skills.js (1)

24-26: Keep _shared exempt from SKILL.md, but still validate its contents.

This blanket skip means skills/_shared/resume-common.md can be deleted or emptied while dependent skills still pass CI. Add a lightweight shared-resource check before continue, or restrict the skip to known shared directories and verify expected files are readable/non-empty.

🧪 Proposed validation guard for shared skill resources
   for (const dir of dirs) {
-    if (dir.startsWith('_')) continue;
+    if (dir.startsWith('_')) {
+      const sharedDir = path.join(SKILLS_DIR, dir);
+      const markdownFiles = fs
+        .readdirSync(sharedDir, { withFileTypes: true })
+        .filter(e => e.isFile() && e.name.endsWith('.md'));
+
+      if (markdownFiles.length === 0) {
+        console.error(`ERROR: ${dir}/ - Shared directory has no markdown resources`);
+        hasErrors = true;
+        continue;
+      }
+
+      for (const file of markdownFiles) {
+        const filePath = path.join(sharedDir, file.name);
+        const content = fs.readFileSync(filePath, 'utf-8');
+        if (content.trim().length === 0) {
+          console.error(`ERROR: ${dir}/${file.name} - Empty file`);
+          hasErrors = true;
+        }
+      }
+      continue;
+    }
     const skillMd = path.join(SKILLS_DIR, dir, 'SKILL.md');

Based on learnings, place curated skills in the skills/ directory; generated/imported skills go under ~/.claude/skills/.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/ci/validate-skills.js` around lines 24 - 26, The loop that skips
directories starting with '_' (dirs and the skillMd variable built from
SKILLS_DIR and path.join) currently allows skills/_shared to be ignored
entirely; change the logic so that before the blanket "continue" you
special-case the "_shared" folder: when dir === '_shared' perform a lightweight
validation (e.g., check that expected shared files such as "resume-common.md"
exist, are readable and non-empty under path.join(SKILLS_DIR, '_shared', ...));
if those checks fail, throw or mark CI failure, otherwise skip SKILL.md
validation for _shared; other underscore-prefixed dirs should continue to be
skipped as before.
skills/kickass-resume-validator/schema/output.schema.json (1)

19-27: Harden required string fields with minLength: 1.

file_analyzed, alignment_to_kickass, ats_compliance, and summary are required but can still be empty strings.

Proposed hardening
-    "file_analyzed": { "type": "string" },
+    "file_analyzed": { "type": "string", "minLength": 1 },
@@
-    "alignment_to_kickass": { "type": "string" },
-    "ats_compliance": { "type": "string" },
-    "summary": { "type": "string" },
+    "alignment_to_kickass": { "type": "string", "minLength": 1 },
+    "ats_compliance": { "type": "string", "minLength": 1 },
+    "summary": { "type": "string", "minLength": 1 },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/kickass-resume-validator/schema/output.schema.json` around lines 19 -
27, The schema allows required string fields to be empty; add "minLength": 1 to
the string definitions for file_analyzed, alignment_to_kickass, ats_compliance,
and summary in output.schema.json so these fields cannot be empty strings
(update the properties for "file_analyzed", "alignment_to_kickass",
"ats_compliance", and "summary" to include minLength: 1 while leaving existing
types/enums intact and ensuring the overall JSON Schema remains valid).
skills/harvard-resume-validator/schema/output.schema.json (1)

10-12: Add minLength: 1 to required text fields for parity and quality.

file_analyzed, alignment_to_harvard, and summary should not validate as empty strings.

Proposed hardening
-    "file_analyzed": { "type": "string" },
-    "alignment_to_harvard": { "type": "string" },
-    "summary": { "type": "string" },
+    "file_analyzed": { "type": "string", "minLength": 1 },
+    "alignment_to_harvard": { "type": "string", "minLength": 1 },
+    "summary": { "type": "string", "minLength": 1 },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/harvard-resume-validator/schema/output.schema.json` around lines 10 -
12, The schema currently allows empty strings for the text fields; add
"minLength": 1 to each string property to prevent empty values — update the JSON
schema properties "file_analyzed", "alignment_to_harvard", and "summary" in
output.schema.json to include "minLength": 1 alongside "type": "string" so they
fail validation for empty strings.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@README.md`:
- Line 1331: The Skills table row currently shows Codex with "10 (native
format)" which is inconsistent with the Codex section above reporting 30 skills;
update the table cell in the row string "| **Skills** | 190 | Shared | 10
(native format) | 37 |" to read "30 (native format)" so the Codex skills count
matches the earlier "Codex" section.

In `@skills/abstract-methods-results-cs/schema/output.schema.json`:
- Around line 18-32: The schema currently allows any problem_type enum
regardless of section; add JSON Schema conditional rules using if/then that
check the document's section (e.g., if: { properties: { section: { const:
"introduction" } }, required: ["section"] }) and in each then: restrict the
problem_type enum to the allowed subset for that section (reference the existing
"problem_type" enum values like "vague_objective","missing_context", etc.),
repeating one if/then per section to enforce the documented mappings; ensure the
top-level schema keeps the original "problem_type" definition but to be
validated narrow it inside the respective then blocks so mismatched
section/category combinations are rejected.

In `@skills/academic-final-review-cs/SKILL.md`:
- Around line 250-256: Update the "**Binary thinking**" bullet to remove the
claim that items are strictly binary and instead explain the three allowed
evaluator states (PASS/FAIL/WARN) and how they map to presence/quality (e.g.,
PASS = present and acceptable, WARN = present but needs improvement, FAIL =
missing or blocking). Locate the "**Binary thinking**" bullet in SKILL.md and
replace its text with a concise rule that prevents evaluator drift by defining
the three states and giving one-line guidance on when to use each.

In `@skills/resume-job-alignment/schema/output.schema.json`:
- Around line 19-82: The schema currently allows empty strings/arrays (e.g.,
"job_title", "resume_analyzed", "overall_alignment", the properties inside
"alignment_breakdown", and array fields "key_matches", "gaps",
"tailoring_suggestions", "priority_roadmap"), which permits vacuous outputs;
update those required string properties (job_title, resume_analyzed,
overall_alignment, alignment_breakdown.required_qualifications, nice_to_have,
technical_skills, soft_skills, domain_knowledge, and all string props inside gap
items and tailoring_suggestions items) to include "minLength": 1, and add
"minItems": 1 to arrays that must return at least one element (key_matches,
gaps, tailoring_suggestions, priority_roadmap) so the validator rejects
empty/blank results while keeping existing "additionalProperties": false and the
"priority" enum unchanged.

In `@skills/sentence-clarity-cs/evals/evals.json`:
- Around line 6-7: The expected_output entry mistakenly asserts "47+ words" for
the first sentence length; update the "expected_output" value in the evals.json
block so it no longer contains the inaccurate numeric count—either replace "47+
words → shortened" with a non-numeric descriptor like "long sentence →
shortened" or compute and insert the correct word count, and ensure the string
still references the other checks (weak_verb, ambiguous_pronoun) so the keys
"prompt" and "expected_output" remain consistent.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a043fa95-f601-469d-9f95-c02d41b56bb3

📥 Commits

Reviewing files that changed from the base of the PR and between 1a50145 and 633edc0.

📒 Files selected for processing (32)
  • AGENTS.md
  • README.md
  • README.zh-CN.md
  • docs/zh-CN/AGENTS.md
  • docs/zh-CN/README.md
  • manifests/install-components.json
  • manifests/install-modules.json
  • manifests/install-profiles.json
  • package.json
  • scripts/ci/validate-skills.js
  • skills/_shared/resume-common.md
  • skills/abstract-methods-results-cs/SKILL.md
  • skills/abstract-methods-results-cs/evals/evals.json
  • skills/abstract-methods-results-cs/schema/output.schema.json
  • skills/academic-final-review-cs/SKILL.md
  • skills/academic-final-review-cs/evals/evals.json
  • skills/academic-final-review-cs/schema/output.schema.json
  • skills/harvard-resume-validator/SKILL.md
  • skills/harvard-resume-validator/evals/evals.json
  • skills/harvard-resume-validator/schema/output.schema.json
  • skills/kickass-resume-validator/SKILL.md
  • skills/kickass-resume-validator/evals/evals.json
  • skills/kickass-resume-validator/schema/output.schema.json
  • skills/paper-structure-cs/SKILL.md
  • skills/paper-structure-cs/evals/evals.json
  • skills/paper-structure-cs/schema/output.schema.json
  • skills/resume-job-alignment/SKILL.md
  • skills/resume-job-alignment/evals/evals.json
  • skills/resume-job-alignment/schema/output.schema.json
  • skills/sentence-clarity-cs/SKILL.md
  • skills/sentence-clarity-cs/evals/evals.json
  • skills/sentence-clarity-cs/schema/output.schema.json

Comment thread README.md
| **Agents** | 48 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
| **Commands** | 79 | Shared | Instruction-based | 31 |
| **Skills** | 183 | Shared | 10 (native format) | 37 |
| **Skills** | 190 | Shared | 10 (native format) | 37 |

⚠️ Potential issue | 🟡 Minor

Keep the Codex skills count consistent in this row.

Line 1331 still says Codex has 10 (native format) skills, but the Codex section above reports 30 skills at Line 1135. Since this row is already being updated, align the Codex count here too.

📝 Proposed docs fix
-| **Skills** | 190 | Shared | 10 (native format) | 37 |
+| **Skills** | 190 | Shared | 30 (native format) | 37 |
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
| **Skills** | 190 | Shared | 10 (native format) | 37 |
| **Skills** | 190 | Shared | 30 (native format) | 37 |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` at line 1331, The Skills table row currently shows Codex with "10
(native format)" which is inconsistent with the Codex section above reporting 30
skills; update the table cell in the row string "| **Skills** | 190 | Shared |
10 (native format) | 37 |" to read "30 (native format)" so the Codex skills
count matches the earlier "Codex" section.

Comment on lines +18 to +32
"problem_type": {
"type": "string",
"enum": [
"vague_objective",
"missing_context",
"unsupported_claim",
"passive_overuse",
"incomplete_description",
"reproducibility_gap",
"missing_detail",
"undefined_terms",
"premature_conclusion",
"incomplete_data"
]
},

⚠️ Potential issue | 🟠 Major

Bind problem_type to section with conditional schema rules.

Right now, section/category mismatches are schema-valid. Add if/then constraints so each section only permits its documented categories.

Proposed schema tightening
   "properties": {
@@
     "problem_type": {
       "type": "string",
       "enum": [
         "vague_objective",
         "missing_context",
         "unsupported_claim",
         "passive_overuse",
         "incomplete_description",
         "reproducibility_gap",
         "missing_detail",
         "undefined_terms",
         "premature_conclusion",
         "incomplete_data"
       ]
     },
@@
-  }
+  },
+  "allOf": [
+    {
+      "if": { "properties": { "section": { "const": "abstract" } } },
+      "then": {
+        "properties": {
+          "problem_type": {
+            "enum": ["vague_objective", "missing_context", "unsupported_claim", "passive_overuse"]
+          }
+        }
+      }
+    },
+    {
+      "if": { "properties": { "section": { "const": "methods" } } },
+      "then": {
+        "properties": {
+          "problem_type": {
+            "enum": ["incomplete_description", "reproducibility_gap", "passive_overuse", "missing_detail", "undefined_terms"]
+          }
+        }
+      }
+    },
+    {
+      "if": { "properties": { "section": { "const": "results" } } },
+      "then": {
+        "properties": {
+          "problem_type": {
+            "enum": ["premature_conclusion", "unsupported_claim", "passive_overuse", "missing_context", "incomplete_data"]
+          }
+        }
+      }
+    }
+  ]
 }
🤖 Prompt for AI Agents

In `@skills/abstract-methods-results-cs/schema/output.schema.json` around lines 18
- 32, The schema currently allows any problem_type enum regardless of section;
add JSON Schema conditional rules using if/then that check the document's
section (e.g., if: { properties: { section: { const: "abstract" } },
required: ["section"] }) and in each then: restrict the problem_type enum to the
allowed subset for that section (reference the existing "problem_type" enum
values like "vague_objective", "missing_context", etc.), repeating one if/then
per section to enforce the documented mappings; ensure the top-level schema
keeps the original "problem_type" definition but narrows it inside the
respective then blocks so mismatched section/category combinations are
rejected.

Comment on lines +250 to +256
- **Binary thinking**: Each item is binary (present/absent, consistent/inconsistent). Avoid ambiguous status.
- **Actionability**: Guidance must tell author exactly what to fix (line numbers, specific changes)
- **Completeness**: Perform the full checklist; don't skip items
- **Pre-submission focus**: This is the final check before sending to conference/journal; be thorough
- **Venue awareness**: Ask about target venue if not obvious; tailor checks (ACM vs. IEEE vs. arXiv)
- **Severity reporting**: Flag FAIL for blocking issues (missing sections, formatting violations); WARN for improvements (vague captions, minor inconsistencies)


⚠️ Potential issue | 🟡 Minor

Resolve checklist-state contradiction (binary vs PASS/FAIL/WARN).

This section says each item is binary, but the contract explicitly allows three states. Please reword this to avoid evaluator drift.

Suggested wording update
-- **Binary thinking**: Each item is binary (present/absent, consistent/inconsistent). Avoid ambiguous status.
+- **Deterministic statusing**: Assign exactly one status per item (`PASS`, `FAIL`, or `WARN`) based on clear evidence; avoid ambiguous judgments.
📝 Committable suggestion


Suggested change
- **Deterministic statusing**: Assign exactly one status per item (`PASS`, `FAIL`, or `WARN`) based on clear evidence; avoid ambiguous judgments.
- **Actionability**: Guidance must tell author exactly what to fix (line numbers, specific changes)
- **Completeness**: Perform the full checklist; don't skip items
- **Pre-submission focus**: This is the final check before sending to conference/journal; be thorough
- **Venue awareness**: Ask about target venue if not obvious; tailor checks (ACM vs. IEEE vs. arXiv)
- **Severity reporting**: Flag FAIL for blocking issues (missing sections, formatting violations); WARN for improvements (vague captions, minor inconsistencies)
🤖 Prompt for AI Agents

In `@skills/academic-final-review-cs/SKILL.md` around lines 250 - 256, Update the
"**Binary thinking**" bullet to remove the claim that items are strictly binary
and instead explain the three allowed evaluator states (PASS/FAIL/WARN) and how
they map to presence/quality (e.g., PASS = present and acceptable, WARN =
present but needs improvement, FAIL = missing or blocking). Locate the "**Binary
thinking**" bullet in SKILL.md and replace its text with a concise rule that
prevents evaluator drift by defining the three states and giving one-line
guidance on when to use each.

Comment on lines +19 to +82
"job_title": { "type": "string" },
"resume_analyzed": { "type": "string" },
"overall_alignment": { "type": "string" },
"alignment_breakdown": {
"type": "object",
"required": [
"required_qualifications",
"nice_to_have",
"technical_skills",
"soft_skills",
"domain_knowledge"
],
"additionalProperties": false,
"properties": {
"required_qualifications": { "type": "string" },
"nice_to_have": { "type": "string" },
"technical_skills": { "type": "string" },
"soft_skills": { "type": "string" },
"domain_knowledge": { "type": "string" }
}
},
"key_matches": {
"type": "array",
"items": { "type": "string" }
},
"gaps": {
"type": "array",
"items": {
"type": "object",
"required": ["requirement", "your_status", "priority", "suggestion"],
"additionalProperties": false,
"properties": {
"requirement": { "type": "string" },
"your_status": { "type": "string" },
"priority": { "type": "string", "enum": ["high", "medium", "low"] },
"suggestion": { "type": "string" }
}
}
},
"tailoring_suggestions": {
"type": "array",
"items": {
"type": "object",
"required": [
"section",
"current_bullet",
"job_focus",
"suggested_rewrite",
"why_better"
],
"additionalProperties": false,
"properties": {
"section": { "type": "string" },
"current_bullet": { "type": "string" },
"job_focus": { "type": "string" },
"suggested_rewrite": { "type": "string" },
"why_better": { "type": "string" }
}
}
},
"priority_roadmap": {
"type": "array",
"items": { "type": "string" }
}

⚠️ Potential issue | 🟡 Minor

Reject vacuous schema-valid outputs.

Most required strings and arrays can still be empty, so { "job_title": "", "key_matches": [], ... } can pass shape validation while carrying no alignment signal. Consider adding minLength: 1 to required strings and minItems: 1 where the skill should always emit at least one finding/suggestion.

🛡️ Proposed schema hardening pattern
-    "job_title": { "type": "string" },
-    "resume_analyzed": { "type": "string" },
-    "overall_alignment": { "type": "string" },
+    "job_title": { "type": "string", "minLength": 1 },
+    "resume_analyzed": { "type": "string", "minLength": 1 },
+    "overall_alignment": { "type": "string", "minLength": 1 },
...
     "key_matches": {
       "type": "array",
+      "minItems": 1,
       "items": { "type": "string" }
     },
📝 Committable suggestion


Suggested change
"job_title": { "type": "string", "minLength": 1 },
"resume_analyzed": { "type": "string", "minLength": 1 },
"overall_alignment": { "type": "string", "minLength": 1 },
"alignment_breakdown": {
"type": "object",
"required": [
"required_qualifications",
"nice_to_have",
"technical_skills",
"soft_skills",
"domain_knowledge"
],
"additionalProperties": false,
"properties": {
"required_qualifications": { "type": "string" },
"nice_to_have": { "type": "string" },
"technical_skills": { "type": "string" },
"soft_skills": { "type": "string" },
"domain_knowledge": { "type": "string" }
}
},
"key_matches": {
"type": "array",
"minItems": 1,
"items": { "type": "string" }
},
"gaps": {
"type": "array",
"items": {
"type": "object",
"required": ["requirement", "your_status", "priority", "suggestion"],
"additionalProperties": false,
"properties": {
"requirement": { "type": "string" },
"your_status": { "type": "string" },
"priority": { "type": "string", "enum": ["high", "medium", "low"] },
"suggestion": { "type": "string" }
}
}
},
"tailoring_suggestions": {
"type": "array",
"items": {
"type": "object",
"required": [
"section",
"current_bullet",
"job_focus",
"suggested_rewrite",
"why_better"
],
"additionalProperties": false,
"properties": {
"section": { "type": "string" },
"current_bullet": { "type": "string" },
"job_focus": { "type": "string" },
"suggested_rewrite": { "type": "string" },
"why_better": { "type": "string" }
}
}
},
"priority_roadmap": {
"type": "array",
"items": { "type": "string" }
}
🤖 Prompt for AI Agents

In `@skills/resume-job-alignment/schema/output.schema.json` around lines 19 - 82,
The schema currently allows empty strings/arrays (e.g., "job_title",
"resume_analyzed", "overall_alignment", the properties inside
"alignment_breakdown", and array fields "key_matches", "gaps",
"tailoring_suggestions", "priority_roadmap"), which permits vacuous outputs;
update those required string properties (job_title, resume_analyzed,
overall_alignment, alignment_breakdown.required_qualifications, nice_to_have,
technical_skills, soft_skills, domain_knowledge, and all string props inside gap
items and tailoring_suggestions items) to include "minLength": 1, and add
"minItems": 1 to arrays that must return at least one element (key_matches,
gaps, tailoring_suggestions, priority_roadmap) so the validator rejects
empty/blank results while keeping existing "additionalProperties": false and the
"priority" enum unchanged.

Comment on lines +6 to +7
"prompt": "Edit these sentences for clarity:\n\n1. The machine learning algorithm that was developed by researchers over a two-year period and was trained on massive datasets containing millions of images was evaluated on the ImageNet benchmark.\n\n2. This approach has been shown to have better performance than previous methods.\n\n3. When the data was processed, it was normalized and then it was used to train the model.",
"expected_output": "JSONL with sentence improvements: sentence_length (47+ words → shortened), weak_verb (passive voice converted to active), ambiguous_pronoun (clarify 'it' and 'this')"

⚠️ Potential issue | 🟡 Minor

Fix the inaccurate word-count expectation.

The first sentence is not 47+ words; keeping that number in the expected output can make the eval reward a false finding. Use a non-numeric “long sentence” expectation or correct the count.
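A quick Node check confirms the miscount — the first sentence is 29 words, not 47+:

```javascript
// Count whitespace-separated words in the eval's first sentence.
const sentence = "The machine learning algorithm that was developed by " +
  "researchers over a two-year period and was trained on massive datasets " +
  "containing millions of images was evaluated on the ImageNet benchmark.";
const wordCount = sentence.trim().split(/\s+/).length;
console.log(wordCount); // 29
```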

🧪 Proposed eval text fix
-      "expected_output": "JSONL with sentence improvements: sentence_length (47+ words → shortened), weak_verb (passive voice converted to active), ambiguous_pronoun (clarify 'it' and 'this')"
+      "expected_output": "JSONL with sentence improvements: sentence_length (long sentence → shortened), weak_verb (passive voice converted to active), ambiguous_pronoun (clarify 'it' and 'this')"
📝 Committable suggestion


Suggested change
"prompt": "Edit these sentences for clarity:\n\n1. The machine learning algorithm that was developed by researchers over a two-year period and was trained on massive datasets containing millions of images was evaluated on the ImageNet benchmark.\n\n2. This approach has been shown to have better performance than previous methods.\n\n3. When the data was processed, it was normalized and then it was used to train the model.",
"expected_output": "JSONL with sentence improvements: sentence_length (long sentence → shortened), weak_verb (passive voice converted to active), ambiguous_pronoun (clarify 'it' and 'this')"
🤖 Prompt for AI Agents

In `@skills/sentence-clarity-cs/evals/evals.json` around lines 6 - 7, The
expected_output entry mistakenly asserts "47+ words" for the first sentence
length; update the "expected_output" value in the evals.json block so it no
longer contains the inaccurate numeric count—either replace "47+ words →
shortened" with a non-numeric descriptor like "long sentence → shortened" or
compute and insert the correct word count, and ensure the string still
references the other checks (weak_verb, ambiguous_pronoun) so the keys "prompt"
and "expected_output" remain consistent.


@cubic-dev-ai cubic-dev-ai bot left a comment


7 issues found across 32 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="skills/academic-final-review-cs/SKILL.md">

<violation number="1" location="skills/academic-final-review-cs/SKILL.md:135">
P1: Conflicting output instructions can cause non-enum LaTeX checklist items to be emitted, breaking strict schema validation.</violation>

<violation number="2" location="skills/academic-final-review-cs/SKILL.md:250">
P3: This instruction conflicts with the output contract: checklist status is tri-state (`PASS`, `FAIL`, `WARN`), not binary. Reword it to avoid inconsistent status generation.</violation>
</file>

<file name="skills/sentence-clarity-cs/evals/evals.json">

<violation number="1" location="skills/sentence-clarity-cs/evals/evals.json:7">
P2: Expected-output rubric has an incorrect word-count claim ("47+ words") for sentence 1, creating inaccurate eval/scoring guidance.</violation>
</file>

<file name="skills/paper-structure-cs/SKILL.md">

<violation number="1" location="skills/paper-structure-cs/SKILL.md:31">
P2: The skill asks for checks (flow/transitions, section balance) that cannot be represented by the schema’s closed `type` enum, leading to dropped or misclassified outputs.</violation>

<violation number="2" location="skills/paper-structure-cs/SKILL.md:60">
P2: The required-section definition is inconsistent: `missing_section` omits Introduction while other sections treat Introduction as required.</violation>

<violation number="3" location="skills/paper-structure-cs/SKILL.md:71">
P2: The spec inconsistently treats Discussion as both required and optional, which can cause false positive structural violations for venue-valid papers.</violation>
</file>

<file name="skills/resume-job-alignment/schema/output.schema.json">

<violation number="1" location="skills/resume-job-alignment/schema/output.schema.json:21">
P2: `overall_alignment` is modeled as an unconstrained string even though project docs/evals treat it as a percentage score, allowing invalid non-score text to pass schema validation.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.


## LaTeX-Specific Checks (If Applicable)

If the paper is written in LaTeX, additionally check:

@cubic-dev-ai cubic-dev-ai bot Apr 18, 2026


P1: Conflicting output instructions can cause non-enum LaTeX checklist items to be emitted, breaking strict schema validation.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/academic-final-review-cs/SKILL.md, line 135:

<comment>Conflicting output instructions can cause non-enum LaTeX checklist items to be emitted, breaking strict schema validation.</comment>

<file context>
@@ -0,0 +1,268 @@
+
+## LaTeX-Specific Checks (If Applicable)
+
+If the paper is written in LaTeX, additionally check:
+
+| Item | What to verify |
</file context>

{
"id": 1,
"prompt": "Edit these sentences for clarity:\n\n1. The machine learning algorithm that was developed by researchers over a two-year period and was trained on massive datasets containing millions of images was evaluated on the ImageNet benchmark.\n\n2. This approach has been shown to have better performance than previous methods.\n\n3. When the data was processed, it was normalized and then it was used to train the model.",
"expected_output": "JSONL with sentence improvements: sentence_length (47+ words → shortened), weak_verb (passive voice converted to active), ambiguous_pronoun (clarify 'it' and 'this')"

@cubic-dev-ai cubic-dev-ai bot Apr 18, 2026


P2: Expected-output rubric has an incorrect word-count claim ("47+ words") for sentence 1, creating inaccurate eval/scoring guidance.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/sentence-clarity-cs/evals/evals.json, line 7:

<comment>Expected-output rubric has an incorrect word-count claim ("47+ words") for sentence 1, creating inaccurate eval/scoring guidance.</comment>

<file context>
@@ -0,0 +1,20 @@
+    {
+      "id": 1,
+      "prompt": "Edit these sentences for clarity:\n\n1. The machine learning algorithm that was developed by researchers over a two-year period and was trained on massive datasets containing millions of images was evaluated on the ImageNet benchmark.\n\n2. This approach has been shown to have better performance than previous methods.\n\n3. When the data was processed, it was normalized and then it was used to train the model.",
+      "expected_output": "JSONL with sentence improvements: sentence_length (47+ words → shortened), weak_verb (passive voice converted to active), ambiguous_pronoun (clarify 'it' and 'this')"
+    },
+    {
</file context>


## Issue Categories

- `missing_section` - Required section (Abstract, Methods, Results, Discussion, Conclusion) not found

@cubic-dev-ai cubic-dev-ai bot Apr 18, 2026


P2: The required-section definition is inconsistent: missing_section omits Introduction while other sections treat Introduction as required.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/paper-structure-cs/SKILL.md, line 60:

<comment>The required-section definition is inconsistent: `missing_section` omits Introduction while other sections treat Introduction as required.</comment>

<file context>
@@ -0,0 +1,581 @@
+
+## Issue Categories
+
+- `missing_section` - Required section (Abstract, Methods, Results, Discussion, Conclusion) not found
+- `section_order` - Sections in wrong sequence (e.g., Results before Methods)
+- `heading_skip` - Heading hierarchy violates standard (# → ## → ###, no skips to ###)
</file context>

- Heading levels follow proper hierarchy (no # → ### skips)
- Bibliography present and complete
- Table of Contents (if present) matches actual sections
- Section transitions and flow are logical

@cubic-dev-ai cubic-dev-ai bot Apr 18, 2026


P2: The skill asks for checks (flow/transitions, section balance) that cannot be represented by the schema’s closed type enum, leading to dropped or misclassified outputs.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/paper-structure-cs/SKILL.md, line 31:

<comment>The skill asks for checks (flow/transitions, section balance) that cannot be represented by the schema’s closed `type` enum, leading to dropped or misclassified outputs.</comment>

<file context>
@@ -0,0 +1,581 @@
+- Heading levels follow proper hierarchy (no # → ### skips)
+- Bibliography present and complete
+- Table of Contents (if present) matches actual sections
+- Section transitions and flow are logical
+- Heading naming conventions are consistent
+- Section balance and proportionality (no single 20-page Methods section, etc.)
</file context>


## How to Evaluate

1. **Required sections**: Check for Abstract, Introduction, Methods, Results, Discussion, Conclusion

@cubic-dev-ai cubic-dev-ai bot Apr 18, 2026


P2: The spec inconsistently treats Discussion as both required and optional, which can cause false positive structural violations for venue-valid papers.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/paper-structure-cs/SKILL.md, line 71:

<comment>The spec inconsistently treats Discussion as both required and optional, which can cause false positive structural violations for venue-valid papers.</comment>

<file context>
@@ -0,0 +1,581 @@
+
+## How to Evaluate
+
+1. **Required sections**: Check for Abstract, Introduction, Methods, Results, Discussion, Conclusion
+2. **Order**: Verify logical flow (Abstract → Intro → Methods → Results → Discussion → Conclusion → Bibliography)
+3. **Headings**: Ensure proper nesting (no jumps; consistent levels for parallel sections)
</file context>

"properties": {
"job_title": { "type": "string" },
"resume_analyzed": { "type": "string" },
"overall_alignment": { "type": "string" },

@cubic-dev-ai cubic-dev-ai bot Apr 18, 2026


P2: overall_alignment is modeled as an unconstrained string even though project docs/evals treat it as a percentage score, allowing invalid non-score text to pass schema validation.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/resume-job-alignment/schema/output.schema.json, line 21:

<comment>`overall_alignment` is modeled as an unconstrained string even though project docs/evals treat it as a percentage score, allowing invalid non-score text to pass schema validation.</comment>

<file context>
@@ -0,0 +1,84 @@
+  "properties": {
+    "job_title": { "type": "string" },
+    "resume_analyzed": { "type": "string" },
+    "overall_alignment": { "type": "string" },
+    "alignment_breakdown": {
+      "type": "object",
</file context>
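If the project docs do treat `overall_alignment` as a percentage string (e.g. "78%"), one possible tightening — a sketch, not the committed schema — is a pattern constraint:

```json
"overall_alignment": { "type": "string", "pattern": "^(100|[1-9]?[0-9])%$" }
```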


## Guidelines

- **Binary thinking**: Each item is binary (present/absent, consistent/inconsistent). Avoid ambiguous status.

@cubic-dev-ai cubic-dev-ai bot Apr 18, 2026


P3: This instruction conflicts with the output contract: checklist status is tri-state (PASS, FAIL, WARN), not binary. Reword it to avoid inconsistent status generation.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/academic-final-review-cs/SKILL.md, line 250:

<comment>This instruction conflicts with the output contract: checklist status is tri-state (`PASS`, `FAIL`, `WARN`), not binary. Reword it to avoid inconsistent status generation.</comment>

<file context>
@@ -0,0 +1,268 @@
+
+## Guidelines
+
+- **Binary thinking**: Each item is binary (present/absent, consistent/inconsistent). Avoid ambiguous status.
+- **Actionability**: Guidance must tell author exactly what to fix (line numbers, specific changes)
+- **Completeness**: Perform the full checklist; don't skip items
</file context>
